
    Improved Parallel Rabin-Karp Algorithm Using Compute Unified Device Architecture

    String matching algorithms are among the most widely used algorithms in computer science, and improving the efficiency of the underlying string matching algorithm greatly increases the efficiency of any application built on top of it. In recent years, graphics processing units (GPUs) have emerged as highly parallel processors that outperform the best central processing units in scientific computing power. Combining recent advances in GPUs with string matching algorithms allows the matching process to be sped up. In this paper we propose a modified parallel version of the Rabin-Karp algorithm using a graphics processing unit. Based on it, the results of the CPU and parallel GPU implementations are compared to evaluate the effect of varying the number of threads, the number of cores, the file size, and the pattern size.
    Comment: Information and Communication Technology for Intelligent Systems (ICTIS 2017)
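
    A minimal sketch of the underlying idea, assuming a rolling-hash Rabin-Karp kernel applied to overlapping text chunks; the chunking stands in for distributing work to GPU threads, and the base, modulus, and chunk count are illustrative choices of mine, not the paper's implementation.

```python
# Illustrative sketch: Rabin-Karp search on overlapping text chunks.
# Splitting the text into chunks mimics how each GPU thread (or CPU core)
# could scan its own region independently; parameters are arbitrary choices.

BASE, MOD = 256, 1_000_000_007

def rabin_karp(text, pattern, offset=0):
    """Return positions (global, shifted by `offset`) where pattern occurs in text."""
    n, m = len(text), len(pattern)
    if m == 0 or n < m:
        return []
    high = pow(BASE, m - 1, MOD)                  # weight of the leading character
    p_hash = t_hash = 0
    for i in range(m):
        p_hash = (p_hash * BASE + ord(pattern[i])) % MOD
        t_hash = (t_hash * BASE + ord(text[i])) % MOD
    hits = []
    for i in range(n - m + 1):
        if t_hash == p_hash and text[i:i + m] == pattern:   # verify to rule out collisions
            hits.append(offset + i)
        if i < n - m:                             # roll the hash one position to the right
            t_hash = ((t_hash - ord(text[i]) * high) * BASE + ord(text[i + m])) % MOD
    return hits

def chunked_search(text, pattern, chunks=4):
    """Split the text into chunks that overlap by m-1 characters so no match is lost."""
    m, n = len(pattern), len(text)
    size = -(-n // chunks)                        # ceiling division
    hits = []
    for start in range(0, n, size):
        end = min(n, start + size + m - 1)
        hits.extend(rabin_karp(text[start:end], pattern, offset=start))
    return sorted(set(hits))

print(chunked_search("abracadabra", "abra"))      # [0, 7]
```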

    Towards Work-Efficient Parallel Parameterized Algorithms

    Parallel parameterized complexity theory studies how fixed-parameter tractable (fpt) problems can be solved in parallel. Previous theoretical work focused on parallel algorithms that are very fast in principle, but did not take into account that when we only have a small number of processors (between 2 and, say, 1024), it is more important that the parallel algorithms are work-efficient. In the present paper we investigate how work-efficient fpt algorithms can be designed. We review standard methods from fpt theory, like kernelization, search trees, and interleaving, and prove trade-offs for them between work efficiency and runtime improvements. This results in a toolbox for developing work-efficient parallel fpt algorithms.
    Comment: Prior full version of the paper that will appear in Proceedings of the 13th International Conference and Workshops on Algorithms and Computation (WALCOM 2019), February 27 - March 02, 2019, Guwahati, India. The final authenticated version is available online at https://doi.org/10.1007/978-3-030-10564-8_2
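
    The abstract names the search-tree technique without spelling it out; as a hedged illustration (my example, not the paper's), here is the textbook bounded search tree for k-Vertex Cover. The two branches at each node are independent, which is the natural place where parallel work could in principle be spent.

```python
# Classic bounded search tree for k-Vertex Cover: pick any uncovered edge (u, v)
# and branch on putting u or v into the cover. The branches are independent
# subproblems, so they could be explored by different processors.

def vertex_cover(edges, k):
    """Return a vertex cover of size <= k as a set, or None if none exists."""
    if not edges:
        return set()
    if k == 0:
        return None
    u, v = edges[0]
    for w in (u, v):                              # two independent branches
        rest = [e for e in edges if w not in e]   # edges already covered by w are removed
        sub = vertex_cover(rest, k - 1)
        if sub is not None:
            return sub | {w}
    return None

edges = [(1, 2), (1, 3), (2, 3), (3, 4)]
print(vertex_cover(edges, 2))                     # e.g. {1, 3}
```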

    More Than 1700 Years of Word Equations

    Geometry and Diophantine equations have been ever-present in mathematics. Diophantus of Alexandria was born in the 3rd century (as far as we know), but a systematic mathematical study of word equations began only in the 20th century. So, the title of the present article does not seem to be justified at all. However, a linear Diophantine equation can be viewed as a special case of a system of word equations over a unary alphabet, and, more importantly, a word equation can be viewed as a special case of a Diophantine equation. Hence, the problem WordEquations: "Is a given word equation solvable?" is intimately related to Hilbert's 10th problem on the solvability of Diophantine equations. This became clear to the Russian school of mathematics at the latest in the mid 1960s, after which a systematic study of that relation began. Here, we review some recent developments which led to an amazingly simple decision procedure for WordEquations, and to the description of the set of all solutions as an EDT0L language.
    Comment: The paper will appear as an invited address in the LNCS proceedings of CAI 2015, Stuttgart, Germany, September 1 - 4, 2015
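
    To make the unary-alphabet correspondence concrete, here is a small worked example of my own (the article itself gives no toy equation): over a one-letter alphabet a word is determined by its length, so solvability of a word equation reduces to a linear Diophantine equation.

```latex
% Over the unary alphabet $\{a\}$ every word is $a^k$ for some $k \ge 0$,
% so the word equation below (with unknowns $x$, $y$) is solvable iff the
% lengths of its two sides can be made equal:
\[
  x\,a\,x\,a\,y \;=\; y\,a\,a\,a\,x
  \ \text{ is solvable}
  \quad\Longleftrightarrow\quad
  2|x| + |y| + 2 \;=\; |x| + |y| + 3
  \quad\Longleftrightarrow\quad
  |x| = 1 .
\]
% Hence the solutions are exactly $x = a$ and $y = a^k$ for any $k \ge 0$.
```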

    Shortest paths in nearly conservative digraphs

    We introduce the following notion: a digraph D = (V, A) with arc weights c: A → R is called nearly conservative if every negative cycle consists of two arcs. Computing shortest paths in nearly conservative digraphs is NP-hard, and even deciding whether a digraph is nearly conservative is coNP-complete. We show that the “All Pairs Shortest Path” problem is fixed-parameter tractable with various parameters for nearly conservative digraphs. The results also apply to the special case of conservative mixed graphs.
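
    A brute-force illustration of the definition on a toy digraph of my own; it enumerates all simple cycles, which is only feasible for very small instances (as the abstract notes, deciding the property is coNP-complete in general).

```python
# "Nearly conservative": every cycle of negative total weight must consist of
# exactly two arcs. Checked here by exhaustive enumeration of simple cycles.
from itertools import permutations

def simple_cycles(nodes, w):
    """Yield all simple cycles as node tuples (smallest node listed first)."""
    for k in range(2, len(nodes) + 1):
        for perm in permutations(nodes, k):
            if perm[0] == min(perm) and all((perm[i], perm[(i + 1) % k]) in w for i in range(k)):
                yield perm

def nearly_conservative(nodes, w):
    for cyc in simple_cycles(nodes, w):
        cost = sum(w[(cyc[i], cyc[(i + 1) % len(cyc)])] for i in range(len(cyc)))
        if cost < 0 and len(cyc) > 2:
            return False
    return True

# Toy digraph: the only negative cycle is the 2-arc cycle 1 <-> 2.
arcs = {(1, 2): -3, (2, 1): 1, (2, 3): 2, (3, 1): 4, (1, 3): 5, (3, 2): 2}
print(nearly_conservative([1, 2, 3], arcs))   # True
```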

    Genetic Algorithm with Optimal Recombination for the Asymmetric Travelling Salesman Problem

    We propose a new genetic algorithm with optimal recombination for asymmetric instances of the travelling salesman problem. The algorithm incorporates several new features that contribute to its effectiveness: (i) the optimal recombination problem is solved within the crossover operator; (ii) a new mutation operator performs a random jump within the 3-opt or 4-opt neighborhood; (iii) the greedy constructive heuristic of W. Zhang and a 3-opt local search heuristic are used to generate the initial population. A computational experiment on TSPLIB instances shows that the proposed algorithm yields results competitive with other well-known memetic algorithms for the asymmetric travelling salesman problem.
    Comment: Proc. of the 11th International Conference on Large-Scale Scientific Computations (LSSC-17), June 5 - 9, 2017, Sozopol, Bulgaria
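
    A bare skeleton of where the three components plug in, with generic stand-ins of my own: nearest-neighbour construction instead of Zhang's greedy heuristic and 3-opt, a random segment reversal instead of the 3-opt/4-opt jump, and a naive one-point order crossover instead of the exact optimal recombination. All names and parameters are illustrative, not the authors'.

```python
# Generic memetic-GA skeleton for the asymmetric TSP (stand-in operators only).
import random

def tour_cost(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def nearest_neighbour(dist, start=0):
    tour, left = [start], set(range(len(dist))) - {start}
    while left:
        tour.append(min(left, key=lambda j: dist[tour[-1]][j]))
        left.remove(tour[-1])
    return tour

def mutate(tour):
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]     # reverse a random segment

def crossover(a, b):
    cut = random.randrange(1, len(a))
    head = a[:cut]
    return head + [c for c in b if c not in head]            # keep a prefix of a, fill from b

def ga(dist, pop_size=20, generations=200):
    pop = [mutate(nearest_neighbour(dist, random.randrange(len(dist)))) for _ in range(pop_size)]
    for _ in range(generations):
        a, b = random.sample(pop, 2)
        child = mutate(crossover(a, b))
        pop.sort(key=lambda t: tour_cost(t, dist))
        pop[-1] = child                                       # replace the worst individual
    return min(pop, key=lambda t: tour_cost(t, dist))

dist = [[0, 2, 9, 10], [1, 0, 6, 4], [15, 7, 0, 8], [6, 3, 12, 0]]
best = ga(dist)
print(best, tour_cost(best, dist))
```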

    Maximizing total job value on a single machine with job selection

    This paper describes a single machine scheduling problem of maximizing total job value under a machine availability constraint. The value of each job decreases over time in a stepwise fashion. Several solution properties of the problem are developed, and based on these properties a branch-and-bound algorithm and a heuristic algorithm are derived. These algorithms are evaluated in a computational study, and the results show that the heuristic algorithm provides effective solutions within short computation times.
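
    A toy illustration of the objective under a simplified model of my own (the abstract does not give the exact one, and the availability constraint is omitted): each job's value drops in steps as its completion time grows, and a sequence is scored by summing the values at the completion times.

```python
# Toy model: each job has a processing time and a stepwise value function
# (value v applies while the completion time is <= the step's deadline).

def job_value(steps, completion):
    """steps: (deadline, value) pairs sorted by deadline; value is 0 after the last step."""
    for deadline, value in steps:
        if completion <= deadline:
            return value
    return 0

def total_value(sequence, proc, values):
    t, total = 0, 0
    for j in sequence:
        t += proc[j]                     # completion time of job j in this sequence
        total += job_value(values[j], t)
    return total

proc = {"A": 3, "B": 2, "C": 4}
values = {
    "A": [(3, 10), (6, 6), (9, 2)],
    "B": [(4, 8), (8, 5)],
    "C": [(5, 9), (10, 4)],
}
print(total_value(["A", "B", "C"], proc, values))   # 10 + 5 + 4 = 19
print(total_value(["B", "C", "A"], proc, values))   # 8 + 4 + 2 = 14
```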

    Probabilistic Analysis of Optimization Problems on Generalized Random Shortest Path Metrics

    Simple heuristics often show a remarkable performance in practice for optimization problems. Worst-case analysis often falls short of explaining this performance. Because of this, "beyond worst-case analysis" of algorithms has recently gained a lot of attention, including probabilistic analysis of algorithms. The instances of many optimization problems are essentially a discrete metric space. Probabilistic analysis for such metric optimization problems has nevertheless mostly been conducted on instances drawn from Euclidean space, which provides a structure that is usually heavily exploited in the analysis. However, most instances from practice are not Euclidean. Little work has been done on metric instances drawn from other, more realistic, distributions. Some initial results have been obtained by Bringmann et al. (Algorithmica, 2013), who have used random shortest path metrics on complete graphs to analyze heuristics. The goal of this paper is to generalize these findings to non-complete graphs, especially Erdős–Rényi random graphs. A random shortest path metric is constructed by drawing independent random edge weights for each edge in the graph and setting the distance between every pair of vertices to the length of a shortest path between them with respect to the drawn weights. For such instances, we prove that the greedy heuristic for the minimum distance maximum matching problem, the nearest neighbor and insertion heuristics for the traveling salesman problem, and a trivial heuristic for the $k$-median problem all achieve a constant expected approximation ratio. Additionally, we show a polynomial upper bound for the expected number of iterations of the 2-opt heuristic for the traveling salesman problem.
    Comment: An extended abstract appeared in the proceedings of WALCOM 201
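
    A minimal sketch of the construction described above and of one of the analyzed heuristics, assuming exponentially distributed edge weights on a G(n, p) graph; the specific distribution, graph size, and the connectivity assumption are illustrative choices of mine.

```python
# Random shortest path metric on an Erdos-Renyi graph G(n, p): draw an
# independent Exp(1) weight for every edge, then define the distance between
# two vertices as their shortest-path distance. A nearest-neighbour TSP tour
# is then computed on the resulting metric.
import heapq, random

def random_shortest_path_metric(n, p, rng):
    w = {}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                w[(u, v)] = w[(v, u)] = rng.expovariate(1.0)
    dist = [[float("inf")] * n for _ in range(n)]
    for s in range(n):                                          # Dijkstra from every source
        dist[s][s] = 0.0
        heap = [(0.0, s)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[s][u]:
                continue
            for v in range(n):
                duv = w.get((u, v))
                if duv is not None and d + duv < dist[s][v]:
                    dist[s][v] = d + duv
                    heapq.heappush(heap, (dist[s][v], v))
    return dist

def nearest_neighbour_tour(dist):
    n, tour, left = len(dist), [0], set(range(1, len(dist)))
    while left:
        tour.append(min(left, key=lambda v: dist[tour[-1]][v]))
        left.remove(tour[-1])
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

rng = random.Random(0)
dist = random_shortest_path_metric(30, 0.3, rng)                # assumes the sampled graph is connected
print(nearest_neighbour_tour(dist))
```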

    Longest Common Extensions in Sublinear Space

    The longest common extension problem (LCE problem) is to construct a data structure for an input string $T$ of length $n$ that supports $\mathrm{LCE}(i,j)$ queries. Such a query returns the length of the longest common prefix of the suffixes starting at positions $i$ and $j$ in $T$. This classic problem has a well-known solution that uses $O(n)$ space and $O(1)$ query time. In this paper we show that for any trade-off parameter $1 \leq \tau \leq n$, the problem can be solved in $O(\frac{n}{\tau})$ space and $O(\tau)$ query time. This significantly improves the previously best known time-space trade-offs, and almost matches the best known time-space product lower bound.
    Comment: An extended abstract of this paper has been accepted to CPM 201
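
    For reference, here is the query itself spelled out naively (my baseline sketch, not the paper's data structure): O(1) extra space but time proportional to the answer, which is the regime the paper's space/time trade-off moves away from.

```python
# Naive baseline for LCE(i, j): compare the two suffixes character by character.

def lce_naive(T, i, j):
    """Length of the longest common prefix of T[i:] and T[j:]."""
    k = 0
    while i + k < len(T) and j + k < len(T) and T[i + k] == T[j + k]:
        k += 1
    return k

T = "abracadabra"
print(lce_naive(T, 0, 7))   # 4  ("abra")
print(lce_naive(T, 1, 8))   # 3  ("bra")
```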

    Coalition Resilient Outcomes in Max k-Cut Games

    We investigate strong Nash equilibria in the max $k$-cut game, where we are given an undirected edge-weighted graph together with a set $\{1,\ldots,k\}$ of $k$ colors. Nodes represent players and edges capture their mutual interests. The strategy set of each player $v$ consists of the $k$ colors. When players select a color they induce a $k$-coloring, or simply a coloring. Given a coloring, the utility (or payoff) of a player $u$ is the sum of the weights of the edges $\{u,v\}$ incident to $u$ such that the color chosen by $u$ is different from the one chosen by $v$. Such games form some of the basic payoff structures in game theory, model many real-world scenarios with selfish agents, and extend or are related to several fundamental classes of games. Very little is known about the existence of strong equilibria in max $k$-cut games, and in this paper we make some progress in understanding it. We first show that improving deviations performed by minimal coalitions can cycle, thus answering negatively the open problem proposed in \cite{DBLP:conf/tamc/GourvesM10}. Next, we turn our attention to unweighted graphs. We first show that any optimal coloring is a 5-SE in this case. Then, we introduce $x$-local strong equilibria, namely colorings that are resilient to deviations by coalitions such that the maximum distance between every pair of nodes in the coalition is at most $x$. We prove that $1$-local strong equilibria always exist. Finally, we show the existence of strong Nash equilibria in several interesting specific scenarios.
    Comment: A preliminary version of this paper will appear in the proceedings of the 45th International Conference on Current Trends in Theory and Practice of Computer Science (SOFSEM'19)
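
    A small sketch of the payoff definition together with a unilateral improving-move check, on an arbitrary toy graph of my own; the coalitional deviations that the paper actually studies are not modelled here.

```python
# Payoff in the max k-cut game: a player's utility is the total weight of its
# incident edges whose other endpoint chose a different color.

def payoff(v, coloring, adj):
    return sum(w for u, w in adj[v] if coloring[u] != coloring[v])

def improving_move(coloring, adj, k):
    """Return (player, color) of some unilateral improving deviation, or None."""
    for v in coloring:
        best = payoff(v, coloring, adj)
        for c in range(k):
            alt = {**coloring, v: c}
            if payoff(v, alt, adj) > best:
                return v, c
    return None

# Toy weighted graph as adjacency lists: adj[v] = [(neighbor, weight), ...]
adj = {
    0: [(1, 2.0), (2, 1.0)],
    1: [(0, 2.0), (2, 3.0)],
    2: [(0, 1.0), (1, 3.0)],
}
coloring = {0: 0, 1: 0, 2: 1}               # players 0 and 1 share a color
print(payoff(1, coloring, adj))             # 3.0: only the edge {1,2} is cut
print(improving_move(coloring, adj, k=2))   # (0, 1): player 0 gains by switching color
```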

    Smoothed Analysis of the Minimum-Mean Cycle Canceling Algorithm and the Network Simplex Algorithm

    The minimum-cost flow (MCF) problem is a fundamental optimization problem with many applications and seems to be well understood. Over the last half century many algorithms have been developed to solve the MCF problem and these algorithms have varying worst-case bounds on their running time. However, these worst-case bounds are not always a good indication of the algorithms' performance in practice. The Network Simplex (NS) algorithm needs an exponential number of iterations for some instances, but it is considered the best algorithm in practice and performs best in experimental studies. On the other hand, the Minimum-Mean Cycle Canceling (MMCC) algorithm is strongly polynomial, but performs badly in experimental studies. To explain these differences in performance in practice we apply the framework of smoothed analysis. We show an upper bound of $O(mn^2\log(n)\log(\phi))$ for the number of iterations of the MMCC algorithm. Here $n$ is the number of nodes, $m$ is the number of edges, and $\phi$ is a parameter limiting the degree to which the edge costs are perturbed. We also show a lower bound of $\Omega(m\log(\phi))$ for the number of iterations of the MMCC algorithm, which can be strengthened to $\Omega(mn)$ when $\phi=\Theta(n^2)$. For the number of iterations of the NS algorithm we show a smoothed lower bound of $\Omega(m \cdot \min\{n, \phi\} \cdot \phi)$.
    Comment: Extended abstract to appear in the proceedings of COCOON 201
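
    For context, the core subroutine of the MMCC algorithm is a minimum-mean-cycle computation; below is a hedged sketch via Karp's classic dynamic program on a toy instance of mine. The residual-network and cycle-cancelling machinery that MMCC wraps around it is omitted.

```python
# Karp's dynamic program for the minimum cycle mean of a directed graph.

def min_cycle_mean(n, arcs):
    """arcs: list of (u, v, cost); assumes every node is reachable from node 0."""
    INF = float("inf")
    # D[k][v] = minimum cost of a walk with exactly k arcs from node 0 to v
    D = [[INF] * n for _ in range(n + 1)]
    D[0][0] = 0.0
    for k in range(1, n + 1):
        for u, v, c in arcs:
            if D[k - 1][u] < INF:
                D[k][v] = min(D[k][v], D[k - 1][u] + c)
    best = INF
    for v in range(n):
        if D[n][v] < INF:
            best = min(best, max((D[n][v] - D[k][v]) / (n - k)
                                 for k in range(n) if D[k][v] < INF))
    return best     # minimum mean cost over all directed cycles

arcs = [(0, 1, 1.0), (1, 2, -3.0), (2, 0, 1.0)]
print(min_cycle_mean(3, arcs))   # -1/3: the single cycle 0 -> 1 -> 2 -> 0 has mean (1 - 3 + 1)/3
```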